Section: New Results

Sensor-based Robot Control

Determining Singularity Configurations in IBVS

Participant : François Chaumette.

This theoretical study was achieved through an informal collaboration with Sébastien Briot and Philippe Martinet from LS2N in Nantes, France. It concerned the determination of the singularity configurations of image-based visual servoing (IBVS) using tools from the mechanical engineering community and the concept of the “hidden” robot. In a first step, we revisited the well-known case of using three image points as visual features, and then solved the general case of n image points [16]. The case of three image straight lines has also been solved for the first time [17].
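
For context, the singularities studied here correspond to configurations where the stacked interaction matrix of the selected features loses rank. The following minimal sketch is not the analysis of [16]; it only uses the standard point-feature interaction matrix from the visual servoing literature to show how such a rank deficiency can be checked numerically for n points, with arbitrary illustrative coordinates and depths:

import numpy as np

def point_interaction_matrix(x, y, Z):
    # Classical 2x6 interaction matrix of a normalized image point (x, y)
    # at depth Z, mapping the camera velocity screw (vx, vy, vz, wx, wy, wz)
    # to the image motion of the point.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def stacked_interaction_matrix(points):
    # Stack the 2x6 matrices of n points into a (2n)x6 matrix.
    return np.vstack([point_interaction_matrix(x, y, Z) for (x, y, Z) in points])

# With three points the stacked matrix is 6x6, and a singularity of IBVS
# shows up as a rank drop (near-zero smallest singular value).
points = [(0.1, 0.2, 1.0), (-0.3, 0.1, 1.5), (0.2, -0.2, 2.0)]
L = stacked_interaction_matrix(points)
print("rank(L) =", np.linalg.matrix_rank(L))
print("smallest singular value:", np.linalg.svd(L, compute_uv=False)[-1])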

We have also designed a control scheme to avoid these singularities during the execution of a visual servoing task [38].

Visual Servoing through Mirror Reflection

Participants : François Chaumette, Eric Marchand.

Apart from the use of catadioptric cameras, few visual servoing works exploit mirrors. Such a configuration is nevertheless interesting since it allows overcoming the limited field of view of the camera. Based on the known projection equations involved in such a system, we studied the theoretical background that allows controlling a planar mirror for visual servoing in different configurations. Limitations intrinsic to such systems, such as the number of DoF actually controllable, have been studied. The case of point features was considered in [51] and extended to line features in [52].
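
As a geometric reminder (not the derivation of [51], [52]): the image observed through a planar mirror can be modeled by reflecting each 3D point across the mirror plane and then applying the usual pinhole projection. A minimal sketch with illustrative plane parameters:

import numpy as np

def reflect_across_plane(p, n, d):
    # Reflect a 3D point p across the mirror plane {x : n.x = d},
    # n being the unit normal of the plane.
    return p - 2.0 * (np.dot(n, p) - d) * n

def pinhole_project(p):
    # Normalized pinhole projection (unit focal length).
    return np.array([p[0] / p[2], p[1] / p[2]])

# Illustrative mirror: plane roughly facing the camera, slightly tilted.
n = np.array([0.0, np.sin(0.1), np.cos(0.1)])
d = 2.0
p = np.array([0.3, -0.2, 1.0])          # 3D point in the camera frame
print("image of the mirrored point:", pinhole_project(reflect_across_plane(p, n, d)))

Rotating the mirror moves the virtual (reflected) point, which is precisely the lever used to servo the image features while the camera itself stays fixed.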

Visual Servoing of Humanoid Robots

Participants : Giovanni Claudio, Fabien Spindler, François Chaumette.

This study is realized in the scope of the BPI Romeo 2 and H2020 Comanoid projects (see Sections 9.2.7 and 9.3.1.2).

We have modeled the visual features at the acceleration level so as to embed visual tasks and visual constraints in an existing Quadratic Programming (QP) controller [13]. Experimental results have been obtained on Romeo (see Section 6.8.4).
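
For the record, the acceleration-level model follows from differentiating the classical relation between feature velocity and camera velocity (with s the feature vector, L_s its interaction matrix, and v the camera velocity screw):

\[
\dot{s} = L_s\, v
\qquad\Longrightarrow\qquad
\ddot{s} = L_s\, \dot{v} + \dot{L}_s\, v .
\]

A visual task specified by a desired feature acceleration \(\ddot{s}^{*}\) is thus linear in the robot accelerations, which is what makes it expressible as a cost term or constraint of a whole-body QP controller; the exact formulation used in [13] may of course differ from this minimal sketch.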

Model Predictive Visual Servoing

Participants : Paolo Robuffo Giordano, François Chaumette.

This study was realized in collaboration with Pierre-Brice Wieber, from the Bipop group at Inria Rhône-Alpes, through the co-supervision of Nicolas Cazy's Ph.D.

Model Predictive Control (MPC) is a powerful control framework able to take explicitly into account the presence of constraints in the controlled system (e.g., actuator saturations, sensor limitations, and so on). In this study, we investigated the use of MPC for tackling one of the most classical issues of visual servoing applications, namely the risk of losing feature tracking because of occlusions, the limited camera field of view, or imperfect image processing/tracking. The MPC framework relies on the possibility of predicting the future evolution of the controlled system over some time horizon, and of correcting the current state of the modeled system whenever new information (e.g., new measurements) becomes available. We have also explored the possibility of applying these ideas in a multi-robot collaboration scenario where a UAV with a downward-looking camera (with limited field of view) needs to provide localization services to a team of ground robots [41].
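
A minimal sketch of the underlying idea (a toy receding-horizon problem, not the formulation of [41]): predict the image trajectory of a feature point over a short horizon, penalize both the distance to the desired feature and any excursion outside the field of view, and re-solve whenever a new measurement arrives. All numerical values are illustrative.

import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.05, 10            # time step and prediction horizon
FOV = 0.5                         # half-width of the normalized image
S_DES = np.array([0.0, 0.0])      # desired image point

def predict(s0, controls):
    # Toy prediction model: the 2D feature is driven directly by a
    # 2D image velocity, s_{k+1} = s_k + dt * v_k.
    traj, s = [], s0.copy()
    for v in controls.reshape(HORIZON, 2):
        s = s + DT * v
        traj.append(s.copy())
    return np.array(traj)

def cost(controls, s0):
    traj = predict(s0, controls)
    tracking = np.sum((traj - S_DES) ** 2)
    fov_violation = np.sum(np.maximum(np.abs(traj) - FOV, 0.0) ** 2)
    return tracking + 100.0 * fov_violation    # soft visibility constraint

s_meas = np.array([0.4, -0.3])                 # latest measured feature
res = minimize(cost, np.zeros(2 * HORIZON), args=(s_meas,),
               bounds=[(-1.0, 1.0)] * (2 * HORIZON))    # actuator limits
print("first control of the horizon:", res.x[:2])      # apply, then replan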

Model Predictive Control for Visual Servoing of a UAV

Participants : Bryan Penin, François Chaumette, Paolo Robuffo Giordano.

Visual servoing is a well-known class of techniques meant to control the pose of a robot from visual input by considering an error function directly defined in the image (sensor) space. These techniques are particularly appealing since they do not require, in general, a full state reconstruction, thus granting more robustness and lower computational loads. However, because of the underactuation of the quadrotor and its inherent sensor limitations (mainly the limited camera field of view), extending the classical visual servoing framework to quadrotor flight control is not straightforward. For instance, to realize a horizontal displacement the quadrotor needs to tilt in the desired direction. This tilting, however, causes any downward-looking camera to point in the opposite direction with, e.g., possible loss of feature tracking because of the limited camera field of view.
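
As a back-of-the-envelope reminder of this coupling: in near-hover conditions the horizontal acceleration a of a quadrotor is tied to its tilt angle \(\theta\) by approximately

\[
a \approx g \tan\theta ,
\]

so any aggressive lateral maneuver necessarily rotates an onboard downward-looking camera away from its target.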

In order to cope with these difficulties and achieve high-performance visual servoing of quadrotor UAVs, we chose to rely on MPC for explicitly dealing with this kind of constraints during flight. We have recently considered the problem of controlling a quadrotor UAV equipped with a downward-looking camera in minimum time, so that it reaches a desired pose w.r.t. a target on the ground from visual input. The control problem is solved by an online replanning strategy that generates (at camera rate) minimum-time trajectories towards the final pose while coping with actuation constraints (limited propeller thrusts) and sensing constraints (the target must always remain in the camera field of view). By exploiting the camera images during motion, the replanning strategy is able to adjust the optimal trajectory online and, thus, be robust against unmodeled effects and other disturbances (which can typically be expected on a quadrotor flying aggressively). The approach has been validated via numerical simulations in [59]. We are now working towards an experimental validation, as well as novel algorithmic extensions allowing for the possibility of temporarily losing sight of the target object, so as to relax the visibility constraint and thus gain in maneuverability.
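
Structurally, the strategy is a receding-horizon loop that re-solves the trajectory optimization at every camera frame. A hedged skeleton of that loop (all four callables are hypothetical placeholders, not the implementation of [59]):

def replanning_loop(get_image, extract_features, solve_min_time, apply_controls):
    # get_image():           grab the next camera frame
    # extract_features(img): measure the target in the image
    # solve_min_time(s):     minimum-time trajectory under thrust and
    #                        field-of-view constraints, from measurement s
    # apply_controls(u):     send controls to the UAV
    while True:
        img = get_image()
        s = extract_features(img)
        trajectory, controls = solve_min_time(s)
        apply_controls(controls[0])   # execute only the first step,
                                      # then replan at the next frame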

UAVs in Physical Interaction with the Environment

Participants : Quentin Delamare, Paolo Robuffo Giordano.

Most research on UAVs deals either with contact-free cases (the UAV must avoid any contact with the environment) or with “static” contact cases (the UAV needs to exert some forces on the environment in quasi-static conditions, reminiscent of what has been done with manipulator arms). Inspired by the vast literature on robot locomotion (from, e.g., the humanoid community), in this research topic we aim at exploiting the contact with the environment to help a UAV maneuver, in the same spirit in which we humans (and, supposedly, humanoid robots) use our legs and arms when navigating in cluttered environments to keep balance or to perform maneuvers that would otherwise be impossible.

As an initial case study, we have considered a planar UAV equipped with a 1-DoF actuated arm capable of hooking onto pivots in the environment. This UAV (named MonkeyRotor) needs to “jump” from one pivot to the next by exploiting the forces exchanged with the environment (the pivot) and its own actuation system (the propellers). This study considers the full dynamics in both cases (hooked, free-flying), proposes an optimization problem for finding optimal trajectories from an initial hooked configuration to the next one, and validates the approach in simulation. We are now working towards a physical realization of a first prototype. This activity is done in cooperation with LAAS-CNRS (Dr. Antonio Franchi, who is co-supervising Quentin Delamare).
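
To fix ideas, a simplified planar model of the hooked phase (our illustration, not necessarily the exact model of the study) is an actuated pendulum: with arm length l, mass m, arm angle \(\varphi\) from the vertical, and total propeller thrust f oriented by the body tilt \(\theta\), the swing dynamics read

\[
m l^2 \ddot{\varphi} = - m g l \sin\varphi + f\, l \sin(\theta - \varphi),
\]

while the free-flight phase after release follows the standard quadrotor dynamics; the optimization then stitches the two phases together.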

Visual Servoing for Steering Simulation Agents

Participants : Axel Lopez Gandia, Eric Marchand, François Chaumette, Julien Pettré.

Steering is one of the basic functionalities of any character animation system. It provides characters with the ability to move locally in the environment so as to achieve basic navigation tasks, such as reaching a goal, avoiding a collision with an obstacle, etc. This problem has been explored in various contexts (e.g., motion planning, autonomous characters or crowd simulation). It turned out that this component plays an important role in the quality of character animation and has received a lot of attention. Many important steps have been taken to improve steering techniques: potential fields, sets of attractive and repulsive forces, linear programming in the velocity space, local optimization of navigation functions, etc. Each new category of approach brings characters closer to forming realistic trajectories when achieving navigation.

Nevertheless, all these techniques remain quite far from the way real humans form their locomotion trajectories, because they are all based on kinematics and geometry. Humans obviously do not solve geometrical problems of this nature while moving in their environment; rather, they control their motion from perceptual features, and more specifically from the visual features they perceive of the environment. To simulate more accurately the perception-action loop used by humans to navigate in their environment, we developed a technique which provides characters with vision capabilities by equipping them with a virtual retina on which we project information about their surroundings. In a first version, we projected information about the relative motion of the objects around them, allowing characters to estimate the risk of collision they face and to move so as to minimize this risk [21]. More recently, we projected purely visual information, and we established the relations that exist between the visual features characters perceive and the motion they perform. This way, we are able to steer characters so that their visual flow satisfies some conditions, allowing them for example to reach a goal while avoiding surrounding obstacles, whether static or moving.
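
To give a flavor of the perceptual cues involved (a classical pair from the vision-based steering literature, not necessarily the exact features of [21]): an obstacle on a collision course keeps a nearly constant bearing angle while its time-to-collision shrinks. A minimal sketch with illustrative 2D kinematics:

import numpy as np

def collision_cues(p_rel, v_rel):
    # Vision-based collision cues for a walker and one obstacle, expressed
    # in the walker's frame (p_rel, v_rel: 2D relative position/velocity):
    #  - bearing_rate: derivative of the bearing angle (rad/s); it stays
    #    near zero on a collision course
    #  - ttc: time-to-collision along the current relative motion
    bearing_rate = (p_rel[0] * v_rel[1] - p_rel[1] * v_rel[0]) / np.dot(p_rel, p_rel)
    closing_speed = -np.dot(p_rel, v_rel) / np.linalg.norm(p_rel)
    ttc = np.linalg.norm(p_rel) / closing_speed if closing_speed > 0 else np.inf
    return bearing_rate, ttc

# Nearly head-on situation: small bearing rate and finite ttc signal a risk.
print(collision_cues(np.array([5.0, 0.2]), np.array([-1.2, 0.0])))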

Direct Visual Servoing

Participants : Quentin Bateux, Eric Marchand.

We have proposed a deep neural network-based method to perform high-precision, robust and real-time 6-DoF visual servoing [63]. We studied how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned on this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions.
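
Downstream of the network, the controller can be the classical pose-based servoing law; a minimal sketch (our illustration of the standard law, where estimate_relative_pose is a hypothetical placeholder standing in for the fine-tuned CNN):

import numpy as np

def pose_servo_velocity(t, theta_u, lam=0.5):
    # Classical pose-based visual servoing law (exponential decrease of
    # the pose error): t is the relative translation, theta_u the
    # axis-angle rotation error.
    return -lam * t, -lam * theta_u

def control_step(current_image, desired_image, estimate_relative_pose):
    # estimate_relative_pose stands for the trained network predicting the
    # relative pose between the current and desired images.
    t, theta_u = estimate_relative_pose(current_image, desired_image)
    v, w = pose_servo_velocity(np.asarray(t), np.asarray(theta_u))
    return np.concatenate([v, w])     # 6-DoF velocity screw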